3 research outputs found

    Subtyping with Generics: A Unified Approach

    Reusable software increases programmers' productivity and reduces repetitive code and software bugs. Variance is a key programming language mechanism for writing reusable software. It is concerned with the interplay of parametric polymorphism (i.e., templates, generics) and subtype (inclusion) polymorphism. Parametric polymorphism enables programmers to write abstract types and is known to enhance the readability, maintainability, and reliability of programs. Subtyping promotes software reuse by allowing code to be applied to a larger set of terms. Integrating parametric and subtype polymorphism while maintaining type safety is a difficult problem. Existing variance mechanisms enable greater subtyping between parametric types, but they suffer from severe deficiencies: they are unable to express several common type abstractions, they can cause a proliferation of types and redundant code, and they are difficult for programmers to use due to their inherent complexity. This dissertation aims to improve variance mechanisms in programming languages supporting parametric polymorphism. To address the shortcomings of current mechanisms, I will combine two popular approaches, definition-site variance and use-site variance, in a single programming language. I have developed formal languages, or calculi, for reasoning about variance. The calculi are example languages supporting both definition-site and use-site variance, and they enable stating precise properties that can be proved rigorously. The VarLang calculus demonstrates fundamental issues in variance from a language-neutral perspective. The VarJ calculus illustrates realistic complications by modeling a mainstream programming language, Java. VarJ supports not only both use-site and definition-site variance but also language features with complex interactions with variance, such as F-bounded polymorphism and wildcard capture.
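The use-site variance discussed above is Java's wildcard mechanism. A minimal sketch of covariant (`? extends`) and contravariant (`? super`) use-site annotations; the class and method names here are illustrative, not drawn from the dissertation:

```java
import java.util.ArrayList;
import java.util.List;

public class VarianceDemo {
    // Covariant use-site annotation: "? extends Number" lets this method
    // accept List<Integer>, List<Double>, etc., because the list is only
    // read from (a source of Numbers).
    static double sum(List<? extends Number> xs) {
        double total = 0.0;
        for (Number n : xs) total += n.doubleValue();
        return total;
    }

    // Contravariant use-site annotation: "? super Integer" lets this method
    // accept List<Integer>, List<Number>, or List<Object>, because the list
    // is only written to (a sink for Integers).
    static void fill(List<? super Integer> sink, int count) {
        for (int i = 0; i < count; i++) sink.add(i);
    }

    public static void main(String[] args) {
        List<Integer> ints = new ArrayList<>();
        fill(ints, 3);                  // ints is now [0, 1, 2]
        System.out.println(sum(ints));  // prints 3.0
    }
}
```

A definition-site scheme would instead declare the variance once, at the declaration of the generic type, sparing callers these per-use annotations.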
A mapping from Java to VarLang was implemented in software that infers definition-site variance for Java. Large, standard Java libraries (e.g., Oracle's JDK 1.6) were analyzed using the software to compute metrics measuring the benefits of adding definition-site variance to Java, which only supports use-site variance. Applying this technique to six Java generic libraries shows that 21-47% (depending on the library) of generic definitions are inferred to have single-variance; 7-29% of method signatures can be relaxed through this inference; and up to 100% of existing wildcard annotations are unnecessary and can be elided. Although the VarJ calculus proposes how to extend Java with definition-site variance, no mainstream language currently supports both definition-site and use-site variance. To assist programmers in utilizing both notions with existing technology, I developed a refactoring tool that refactors Java code by inferring definition-site variance and adding wildcard annotations. This tool is practical and immediately applicable: it assumes no changes to the Java type system while taking into account all its intricacies. The system allows users to select declarations (variables, method parameters, return types, etc.) to generalize and considers declarations not declared in available source code. I evaluated the technique on six Java generic libraries and found that 34% of available declarations of variant type signatures can be generalized, i.e., relaxed with more general wildcard types. On average, 146 other declarations need to be updated when a declaration is generalized, showing that this refactoring would be too tedious and error-prone to perform manually. The result of applying this refactoring is a more general interface that supports greater software reuse.

    No full text
    Dynamic computational complexity is the study of resource-bounded ongoing computational processes. We consider the general problem of processing a sequence of inputs, instead of a single input. We introduce a new model for dynamic computation and investigate the computational complexity of various dynamic problems. The field of computational complexity has previously studied static computation, which takes a single fixed input and computes the desired result. We define a dynamic problem to be the function mapping a stream of data to the desired stream of output, and we investigate the complexity of the dynamic computation required to compute that function. We describe complexity classes of dynamic problems, reductions between dynamic problems, and complete problems for dynamic complexity classes.
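The static/dynamic contrast the abstract draws can be made concrete with a toy example. The running-sum problem below is an illustrative choice, not one from the dissertation: a dynamic algorithm carries state between inputs and does O(1) work per input, where a static algorithm would recompute over the whole input prefix after every update.

```java
// A dynamic problem maps a stream of inputs to a stream of outputs.
// Illustrative sketch: the running-sum problem, solved dynamically.
public class RunningSum {
    private long total = 0;  // state maintained across inputs

    // Consume one input from the stream, emit the next output.
    public long step(long x) {
        total += x;
        return total;
    }

    public static void main(String[] args) {
        RunningSum rs = new RunningSum();
        for (long x : new long[] {3, 1, 4}) {
            System.out.println(rs.step(x));  // prints 3, then 4, then 8
        }
    }
}
```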
